BATCH PROCESSING SCHEDULING MECHANISM
Patent abstract:
The present invention relates to a device comprising at least one computer machine and software implementing a mechanism for creating batch processing schedules optimized for a given IT infrastructure, the device consisting of: - a hardware and software arrangement for storing a resource repository and a regulatory repository, - a module for building an optimized execution schedule from all the elements collected in the resource repository and in compliance with the execution rules defined by the regulatory repository.
Publication number: FR3038405A1
Application number: FR1556274
Filing date: 2015-07-02
Publication date: 2017-01-06
Inventors: Damien Aiello; Bruno Demeilliez; Christophe Germain
Applicant: Bull SA
Main IPC class:
Patent description:
Batch processing scheduling mechanism

TECHNICAL FIELD OF THE INVENTION

The present invention relates to the field of batch processing ("batch" in English) and more particularly to the optimization of batch scheduling. Scheduling is defined as the order of execution of the different processes; this order is computed according to rules and/or prerequisites.

In the following, a grid denotes a virtual infrastructure consisting of a set of potentially shared, distributed, heterogeneous, relocated and autonomous IT resources. A grid is indeed an infrastructure, that is to say a technical arrangement of hardware and software. This infrastructure is called virtual because the relationships between the entities that compose it exist not at the hardware level but from a logical point of view. A grid guarantees non-trivial qualities of service, that is to say it differs from other infrastructures in its ability to respond adequately to requirements (accessibility, availability, reliability, etc.) given the computing power or storage that it can provide.

In the following, a transaction processing system, or STT, is a system capable of executing a set of unit operations in a given transactional context. The STT must be able to guarantee at any time the properties inherent to a computer transaction for the data it manages. The term "transactional processing" is the French translation of the English "transaction processing" (TP).

In the following, a software metric is a compilation of measurements derived from the technical or functional properties of a piece of software. Metrics can be simple or more complex; they are always built from so-called "basic" measurements, for example the number of lines of code, the cyclomatic complexity or the number of comments.

In the following, a computer probe is a piece of software, associated with a piece of equipment, that performs measurements and reports them to monitoring equipment in order to characterize the quality of network flows or the quality of service (QoS).

In the following, a batch process is an automatic sequence of commands (a process) run on a computer without operator intervention. Once this process is complete (whatever its result), the computer processes the next batch. Batch processing ends once all the batches in the stack (queue) have been executed. Batch processes are mainly used for automated tasks, in particular for the management of accounts on the computer estate of a company, a university, etc. Work started in batch mode uses only the processor cycles that are not used by interactive work such as transactional processing. Batches therefore always have a lower execution priority than interactive jobs, but a longer time slice than interactive jobs so that they remain in main memory as long as possible. Why is the time slice more generous for a batch than for an interactive job? Because during a database read command, the system loads several pages into the buffer so as to perform as few disk accesses as possible, knowing that a disk access immediately causes the batch to be purged from memory; it then has to wait to be brought back into memory before it can continue executing. Batch processes can thus be very greedy in hardware resources. It is therefore very common, for practical or economic reasons, for these processes to be scheduled at night, at weekends or during periods of low transactional activity. During periods of intense activity these jobs are placed in a stack, or queue, in a batch file.
The execution of the batch input file can trigger actions as varied as updating a database, reconciling financial transactions, sending e-mails to the user of a transactional process that depends on the batch process, or producing one or more output files for use in other tasks (batch or otherwise). However, the constant evolution of information systems, and consequently the multiplication of batch processes, tends to "overload" and "overpopulate" the periods during which these processes can be executed. It is therefore necessary to plan them.

BACKGROUND OF THE INVENTION

Patent application US 2014/0075442 discloses a method for scheduling the execution of a plurality of batch jobs by a computer system, the method comprising the steps of: - reading one or more constraints that bear on the execution of the plurality of batch jobs by the computer system, including SLA constraints and a usual load on the computer system; - grouping the plurality of batch jobs into at least one operating frequency that includes at least one batch job; - adjusting the at least one operating frequency to a first execution frequency; - calculating the load generated by each batch job in the first operating frequency on the computer system based on the start time of each batch job, so as to protect against the worst case in which the number of batch transactions in a given period of time exceeds the average number of transactions expected during the busiest such period; and - determining an optimized start time for each batch job in the first operating frequency that satisfies the one or more constraints and distributes the load of each batch job over the computer system, using the calculated load of each batch job, the usual load and the one or more constraints, including at least one SLA (Service Level Agreement) constraint.

This method requires an operating frequency and does not optimize according to the constraints, in particular the resources actually used by the batch processes in question. Batch schedulers are known, but to date no solution optimizes the scheduling of batch processes over an infrastructure. The purpose of the present invention is to optimize the scheduling of batch processes regardless of frequency, based on their execution constraints, the resources they consume and the resources available on the infrastructure they use.

GENERAL DESCRIPTION OF THE INVENTION

A first object of the invention is a device comprising at least one computer machine and software implementing a mechanism for creating batch processing schedules optimized for a given IT infrastructure, characterized in that it consists of: - a hardware and software arrangement for storing a resource repository and a regulatory repository defining and storing the inter-process scheduling rules, - a module for building an optimized execution schedule from all the elements collected in the resource repository and in compliance with the execution rules determined by the regulatory repository.
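For readers who prefer code to prose, the overall arrangement just described can be summarized as follows. This is a minimal illustrative sketch in Python and not the patented implementation; all names (ResourceRepository, RegulatoryRepository, PlanBuilder, the field layouts) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical, simplified data model of the device described above.
# Consumption figures are percentages of a resource's capacity per time slot.

@dataclass
class ResourceRepository:
    """Stores the empty footprint of each machine and the footprint of each batch process."""
    empty_footprint: Dict[str, List[float]] = field(default_factory=dict)                 # machine -> per-slot consumption
    process_footprint: Dict[str, Dict[str, List[float]]] = field(default_factory=dict)    # process -> machine -> per-slot consumption

@dataclass
class RegulatoryRepository:
    """Stores the execution rules (time restrictions, ordering, incompatibilities)."""
    rules: List[str] = field(default_factory=list)    # e.g. "A before B", "C starts before 02:00"

class PlanBuilder:
    """Builds an execution schedule from the two repositories."""
    def __init__(self, resources: ResourceRepository, regulations: RegulatoryRepository):
        self.resources = resources
        self.regulations = regulations

    def build(self) -> Dict[str, int]:
        # Placeholder: would return a mapping process -> chosen start slot.
        # The construction strategy itself is sketched later in this description.
        raise NotImplementedError
```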
According to another feature of the invention, the resource repository consists of: - a hardware and software arrangement for measuring, by means of consumption probes, the empty footprint of all the machines impacted by the batch process(es); - a hardware and software arrangement allowing, on the one hand, the measurement, by means of consumption probes, of the footprint of each batch process on each machine, by subtracting the empty footprint from the measured consumption over the same period, and, on the other hand, the calculation of the total footprint on all the machines impacted by the process(es), then the storage of this total footprint in the resource repository.

According to another feature of the invention, the regulatory repository is constituted by means of a human-machine interface (20) making it possible to create the execution rules applicable to each batch process Ti and to store them in memory in order to constitute the regulatory repository.

According to another feature of the invention, the regulatory repository consists of at least one stored matrix table corresponding to each process Ti applied to one or more resources of a machine Mj, in which the number of resources defines one dimension of the matrix table, the number of allocated time slots defines the other dimension, and the coefficients of the matrix table are the percentages of use of the resource, with the value "101" representing non-activity, the value "0" representing activity of the resource, and the value "99", representing the incompatibility of the process with another process, constituting the coefficient of an additional row or column.

According to another feature of the invention, the resource repository is constituted by means of a human-machine interface making it possible to create utilization percentage rules for each resource of a machine, applicable for each batch process, and to save them in memory in order to constitute the resource repository.

According to another feature of the invention, the resource repository consists of a matrix table stored for each machine Mj, one dimension of which is the number of distinct resources of the machine and the other dimension the number of time slots of the machine, the coefficients defining, according to the resource and the time slot, the occupancy rates of each piece of equipment of the machine, the coefficient "0" denoting the unoccupied state of the resource.

According to another feature of the invention, once an execution plan has been drawn up, this predetermined execution plan is executed in order to check that the set of rules is respected for all the machines involved in the batch processing and that resource consumption is at the expected level for each resource.

According to another feature of the invention, the HMI involved in the definition of the regulatory repository makes it possible to: - define and store the hourly restrictions of each process;
- define and store the maximum acceptable limits of resource consumption for each server making up the application chain; - define and store the inter-process scheduling rules.

According to another feature of the invention, the defined rules can be as varied and different as: - process A must be executed before process B, - process C must start before a specific time, - process D must finish before a specific time, - the consumption of resource X of machine 1 cannot exceed Y%, - machine 2 must be fully available for user actions from a specific time.

According to another feature of the invention, the device comprises a second human-machine interface (HMI 2) for defining the consumption probes on each machine, which report metrics derived from the use of the machine's resources, as well as the acceptability thresholds associated with each probe.

Another object of the invention is to provide a method for overcoming one or more disadvantages of the prior art. This object is achieved by a method for optimizing batch processing schedules on a computer infrastructure comprising a plurality of machines, implemented by software running on at least one machine of the infrastructure, characterized in that it comprises: - a step of measuring the empty footprint of all the machines impacted by batch processing; - a step of measuring the footprint of each process on all the machines that the batch process impacts, by subtracting the consumption measured in the first measurement step from the consumption measured over the same period, and of recording a reference of the needs, in terms of resources of the application chain, without activity and during execution of the processing; - a step of storing execution rules and expected consumption levels; - a step of constructing an execution plan with all the elements collected during the preceding steps and in compliance with the recorded execution rules; - a step of executing the predetermined plan and verifying that the set of rules is respected and that resource consumption is at the expected level.

According to another feature of the invention, the construction of the execution plan is carried out: - from the resource and regulatory repositories, by determining the optimal execution plan, aiming to reduce the overall execution time while maximizing the use of available resources; - and by determining the level of resources theoretically used during the execution of the entire plan.

According to another feature of the invention, the verification is carried out by comparing the level of resources actually used with that calculated previously, in order to take into account the impact of executing the processes in parallel, with, if necessary, a modification of the regulatory repository if some rules are not followed during execution.

DESCRIPTION OF THE ILLUSTRATIVE FIGURES

Other features and advantages of the invention will emerge on reading the following description, with reference to the appended figures, which illustrate: FIG. 1, a schematic view of the batch processing scheduling mechanism according to one embodiment of the invention; FIG. 2, the steps of a method of measuring the footprint of batch processes on the IT infrastructure; FIG. 3a, a processing matrix T1 stored in memory; FIG. 3b, an example of a processing matrix T1 with an additional row relative to the number of resources;
FIG. 3c, an example of a processing matrix T2 whose additional row contains the same information as the additional row of T1, relative to the number of resources, in order to define the incompatibility of its execution at the same time as T1; FIG. 4, an example of a machine matrix Mj; FIG. 5, an example of the steps of construction of the execution plan on a machine; FIGS. 6a, 6b, 6c, 6d and 6e, examples of five possible matrix definitions for the processing T1. For the sake of clarity, identical or similar elements are marked with identical reference signs throughout the figures.

DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE INVENTION

An exemplary embodiment of the invention is illustrated in FIG. 1; this example represents the architecture of a device comprising at least one computer machine and software implementing a mechanism for creating batch processing schedules optimized for a given IT infrastructure, said device consisting of: - a hardware and software arrangement for storing a resource repository (1) and a regulatory repository (2), - a module for building an execution schedule (3) optimized with all the elements collected by the resource repository (1) and in compliance with the execution rules determined by the regulatory repository (2).

In some embodiments, said resource repository (1) is constituted by a hardware and software arrangement for measuring, by means of consumption probes (4), the empty footprint (11) of all the machines impacted by the batch process(es). The empty footprint of each machine (11) is thus measured using system probes (4) deployed for the occasion, for example the Nmon (41) and Perfmon (42) probes, without any process running. Nmon is a tool that can display, among other things, CPU, memory, swap and network statistics, information about users, groups and storage media, kernel usage, the most resource-hungry processes and many other useful indicators. Perfmon, or Performance Monitor, is an integrated tool for monitoring the performance of a machine and providing a complete report within 60 seconds from any machine.

Said resource repository (1) also consists of a hardware and software arrangement allowing, on the one hand, the measurement, by means of consumption probes, of the footprint of each batch process (12) on each machine, by subtracting the empty footprint from the measured consumption over the same period, and, on the other hand, the calculation of the total footprint on all the machines impacted by the process(es), followed by the storage of this total footprint in the resource repository (1). The footprint of each process on each machine (12) is thus measured in the same way as the empty footprint (11), by executing the processes one by one, sequentially.

In order to measure the footprint of each process on each machine, a method of measuring the footprint of batch processes on the IT infrastructure is implemented as follows (FIG. 2). In a step S30, a generated injection file is received, opened and read in order to determine, in step S31, a first batch scenario to be executed (YES). The requests of the scenario are then sent during step S32, according to the scenario and its parameterization. A measurement step S33 is then implemented to retrieve footprint measurement results. Step S31 is then executed again to determine whether one or more scenarios remain to be executed; if so (YES), S32 and S33 are implemented again, otherwise (NO) a step S34 of storing the measurement results is executed.
The results obtained and stored are then analyzed during the analysis step S35 and a footprint report (for example in matrix form) for each batch process is generated in step S36. These measurements are made by means of a device comprising a memory unit (MEM). This memory unit comprises a random-access memory for storing, in a non-persistent manner, calculation data used during the implementation of a method according to the description above, according to various embodiments. The memory unit furthermore comprises a non-volatile memory (for example of the EEPROM type) for storing, for example, a computer program, according to one embodiment, for its execution by a processor (not shown) of a processing unit (PROC) of the device. For example, the memory unit may store a template base file or an injection file as previously described. The device also comprises a communication unit (COM), for example to receive injection files and/or performance measurement results, and/or to send requests to a computer infrastructure for which the total footprint of the processes is to be determined.

These measurements make it possible to generate, for example, a table in which each row corresponds to a resource and each column to a batch process, giving for each process the CPU, RAM, memory, disk, network, etc. consumption, as a percentage of the capacity of each resource.

Moreover, in certain embodiments of the invention, said regulatory repository is constituted by a human-machine interface (HMI 1) (20) allowing the user to create the execution rules applicable to each batch process and to store them in memory to constitute said regulatory repository (2). Said human-machine interface HMI 1 allows the user to: - define and store the time restrictions of each process, - define and store the maximum acceptable limits of resource consumption for each server making up the application chain, - define and store inter-process scheduling rules. Indeed, for each process, information such as the following may be available: - the duration of the process, - the consumption generated by the process on each machine, - the order of execution of the process, - the start time of the process, - the end time of the process.

In some embodiments, this information can be transcribed in the form of a processing matrix Ti stored in a memory (31), where each row of the processing matrix represents the consumption of a resource, and each value of the row represents the consumption of this resource over one time slot. For example, for a processing T1, the processing matrix T1 stored in memory (31) can be as in FIG. 3a. The values "101" in FIG. 3a indicate to the plan construction module that the processing T1 cannot take place in this time slot. The values "0" indicate to the plan construction module that the processing T1 may start or continue in this time slot. The other values 10, 27, 26, 12, 11 of FIG. 3a indicate to the plan construction module the percentage of consumption of each resource (CPU1, RAM1, etc.) according to the time slot. If the processing T1 can be moved over time, as many different processing matrices Ti as necessary can be built and stored in memory (31).

In some embodiments, two batch processes (Ti, Ti+n) may be linked. In that case, at least two processing matrices (31) having at least one additional row with respect to the number of resources are constructed to indicate to the plan construction module that the processes are linked.
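As an illustration only, such a processing matrix can be held as a small two-dimensional array. The concrete numbers below are invented for the example; only the meaning of the coefficients (101 = forbidden slot, 0 = free, other values = % consumption, a shared 99 in an extra row = incompatibility marker) comes from the description above.

```python
import numpy as np

FORBIDDEN = 101    # the process cannot run in this time slot
FREE = 0           # the process may start or continue in this time slot
INCOMPATIBLE = 99  # marker placed in an extra row shared by mutually exclusive processes

# Hypothetical processing matrix T1: one row per resource (CPU1, RAM1),
# one column per time slot, values in % of the resource capacity.
T1 = np.array([
    [FORBIDDEN, 10, 27, 26, FREE, FREE],   # CPU1
    [FORBIDDEN, 12, 11, 10, FREE, FREE],   # RAM1
])

# The same process, declared incompatible with another process T2:
# an extra row carries the shared marker 99 in its first column.
T1_linked = np.vstack([T1, [INCOMPATIBLE, 0, 0, 0, 0, 0]])
T2_linked = np.vstack([
    np.array([[FORBIDDEN, FORBIDDEN, 15, 20, FREE, FREE],    # CPU1
              [FORBIDDEN, FORBIDDEN,  8,  9, FREE, FREE]]),  # RAM1
    [INCOMPATIBLE, 0, 0, 0, 0, 0],
])

# Because T1_linked and T2_linked share an identical extra row, the plan
# construction module must not schedule them on overlapping time slots.
print(np.array_equal(T1_linked[-1], T2_linked[-1]))  # True
```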
For example, in order to enforce a defined rule such as "the overlap of at least two processes T1 and T2 is prohibited", the user adds at least one row to each processing matrix; this row no longer contains the consumption of a resource per time slot but an identical value (for example 99) in the first column for the two (or more) processes concerned, and indicates to the plan construction module that, for example, the overlap of the processes T1 and T2 is prohibited. Thus, in an illustrative and non-limiting manner, the matrices of FIGS. 3b and 3c are obtained. According to this example, the execution plan construction module will deduce that the two processes cannot execute simultaneously because they have an identical third row within their respective matrices (T1, FIG. 3b; T2, FIG. 3c). The same would apply if several matrices Ti, ..., Ti+n were constructed with the same additional row. Thus, through the choice of the values introduced into the rows of the matrices, the user is able to shift the execution of one process after another, to start it in a precise time slot, or to make it finish before another time slot. The set of matrices Ti thus formed constitutes the regulatory repository.

In some embodiments, the defined rules may be as varied and different as needed so that one or more of the following constraints are satisfied: - process A must execute before process B, - process C must start before a specific time, - process D must end before a specific time, - the consumption of resource X of machine 1 cannot exceed Y%, - machine 2 must be fully available for user actions from a specific time. Thus the user will create as many matrices as there are processes Ti and as many matrices as there are machines Mj belonging to the IT infrastructure on which the processes will execute, according to the rules thus defined.

In some embodiments, the device includes a second human-machine interface (HMI 2) (10) for defining the consumption probes (4) on each machine, which return the metrics (5) derived from the use of the machine's resources, as well as the acceptability thresholds associated with each probe. Thus, for each machine, information such as the following may be available: - the maximum resource consumption, - any white periods (time slots without processing). This information may preferably be transcribed in the form of a machine matrix Mj stored in a memory (32), each row of the machine matrix Mj representing a resource of the machine (CPU, RAM, etc.), each column of the matrix a time slot, and each value present in a cell of a resource's row representing the consumption of this resource for the time slot defined by the column. For example, for a machine Mj composed, as an example, of a CPU and RAM, the machine matrix Mj can be as in FIG. 4. The coefficients "0" indicate time slots without processing. The values 100 or 95 indicate the desired maximum occupancy rates of each piece of equipment of the machine according to the time slot. It is understood that the machine can incorporate other resources or equipment such as disk, I/O, network, etc.; the number of rows of the processing or machine matrices will thus vary according to the resources to be used for the batch processing. These matrices Mj stored in memory (32) constitute the resource repository. Through the use of the processing matrices Ti and of the machine matrices Mj on which the processes will execute, the program is able to construct an execution plan.
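A machine matrix can be represented the same way, and a candidate placement of a process can be checked against it. The sketch below is only illustrative: the helper name fits_on_machine and the exact comparison rule (slot-by-slot consumption must not exceed the remaining allowed occupancy) are assumptions consistent with, but not literally specified by, the text above.

```python
import numpy as np

# Hypothetical machine matrix Mj: maximum allowed occupancy (in %) per
# resource (rows: CPU, RAM) and per time slot; 0 means "no processing allowed".
Mj = np.array([
    [0, 100, 100, 95, 95, 0],   # CPU
    [0, 100, 100, 95, 95, 0],   # RAM
])

def fits_on_machine(definition: np.ndarray, machine: np.ndarray, used: np.ndarray) -> bool:
    """Check one candidate processing definition against a machine matrix.

    `definition` holds the % consumption of the process per resource and slot
    (101, meaning "forbidden slot", is treated as no consumption requested there),
    `used` holds the consumption already scheduled on the machine.
    """
    consumption = np.where(definition == 101, 0, definition)
    return bool(np.all(used + consumption <= machine))

used_so_far = np.zeros_like(Mj)
candidate = np.array([
    [0, 10, 27, 26, 0, 0],   # CPU
    [0, 12, 11, 10, 0, 0],   # RAM
])
print(fits_on_machine(candidate, Mj, used_so_far))  # True: the slots can absorb it
```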
In order to construct an execution plan, the plan construction module launches, for a given machine, matrix computations (33) between the processing matrices stored in memory (31) and the machine matrices stored in memory (32). An example of construction of the execution plan on a machine is as follows (FIG. 5). In a step S40, the set of possible matrix definitions for the set of processes is constructed; these represent the consumption of resources spread over the different time slots. For example, five possible matrix definitions are developed for the processing T1 (FIGS. 6a, 6b, 6c, 6d, 6e). In step S41, the definitions are compared with the machine matrix (FIG. 4) in order to measure their compatibility with the machine. The plan construction module then eliminates, in step S42, the non-compatible definitions, in the example presented those of FIGS. 6b and 6c. In step S43, the plan construction module selects a first process. Among the remaining matrix definitions for this process, a definition is chosen at random in step S44. The plan construction module then recalculates the machine matrix in step S45. If the calculation is successful, that is to say if the machine resources are sufficient for said definition, the chosen definition is stored. Otherwise, another definition is preselected at random among the remaining definitions and the plan construction module repeats steps S45 and S46. Once the compatible definition of a process has been stored, the plan construction module performs steps S43, S44, S45, S46 and S47 until all the processes have been scheduled. At the end of step S48, the plan construction module generates the execution plan, which is the result of the matrix calculations between the stored processing definitions and the machine matrix.

By the term human-machine interface is meant any element allowing a human being to interact with a computer, in particular, and without this list being exhaustive, a keyboard and means, responsive to commands entered at the keyboard, for displaying elements and optionally selecting, with a mouse or a touchpad, elements displayed on the screen. Another embodiment is a touch screen for selecting, directly on the screen, the elements touched by a finger or an object, possibly combined with the display of a virtual keyboard. A further embodiment is the use of a camera recording the movements of the user's eye and using this information to highlight the element pointed at by the user's gaze, with an additional movement of the eye, the head or another part of the body, a key press, or the touching of an area of the screen validating the selection thus made. Another example of a human-machine interface may be a voice command for displaying elements and then validating the selection of an element. These examples are purely illustrative and do not limit the possible future solutions for selecting elements and declaring them to the activation and linking module so that the latter can fulfil its task.

As can be understood, the present invention can be presented as a process mechanism for optimizing batch processing schedules. The first step is to measure the empty footprint of all the machines impacted by batch processing (E1), that is to say the consumption of machine resources while no processing is in progress and no user action is taking place. The second step is to measure the footprint of each process on all the machines it impacts (E2), by subtracting the consumption measured in the first step from the consumption measured over the same period (a sketch of this subtraction is given below).
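The footprint of a process on a machine is thus simply the consumption measured while the process runs alone minus the empty footprint measured over an equivalent period. A minimal sketch, assuming per-slot consumption series collected by probes such as Nmon or Perfmon; the function name, data layout and numbers are illustrative only.

```python
from typing import Dict, List

def process_footprint(measured: Dict[str, List[float]],
                      empty: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """Footprint of one batch process on one machine: consumption measured
    during its isolated run minus the empty footprint over the same period."""
    return {resource: [m - e for m, e in zip(measured[resource], empty[resource])]
            for resource in measured}

# Example with invented numbers (percentages of capacity per time slot):
empty = {"cpu": [2.0, 2.0, 3.0], "ram": [10.0, 10.0, 10.0]}
measured = {"cpu": [12.0, 29.0, 29.0], "ram": [22.0, 21.0, 20.0]}
print(process_footprint(measured, empty))
# {'cpu': [10.0, 27.0, 26.0], 'ram': [12.0, 11.0, 10.0]}
```

The per-machine results, collected for every machine the process impacts, are what the resource repository stores as the total footprint of that process.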
These first two steps make it possible to obtain, and to store, a repository of the needs in terms of resources (1) of the application chain without activity and during the execution of the processes. The third step is to create execution rules (E3). The rules can be as different as: - process A must be executed before process B, - process C must start before a specific time, - process D must end before a specific time, - the consumption of resource X of machine 1 cannot exceed Y%, - machine 2 must be fully available for user actions from a specific time, - etc. These execution rules can take the form of the matrix tables Ti for the processes and Mj for the machines. The fourth step consists of constructing an execution plan (E4) with all the elements collected, for example in matrix form, during the preceding steps and in compliance with the execution rules determined in the third step. The fifth step is to play the predetermined execution plan (E5) and to check (E6) that the set of rules is respected and that resource consumption is at the expected level. If some rules are not respected, the footprints can be revised upwards or downwards, or rules can be added, in order to establish a new execution plan that will be replayed in a new iteration of the fifth step. This is iterated until an execution plan that complies with all the execution rules is obtained (a sketch of this loop is given below). The invention is not limited to the embodiments previously described by way of example, and embodiments corresponding to the scope defined by the claims could be envisaged without departing from the scope of the invention.
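Putting the pieces together, the construction and verification loop described above (build a plan from candidate definitions, play it, check the rules, adjust and iterate) might look like the following sketch. It is an illustration under assumptions: the random choice among remaining compatible definitions loosely follows steps S43 to S47, while run_plan_and_measure and rules_ok are placeholders standing in for the consumption probes and the regulatory repository check.

```python
import random
from typing import Dict, List

import numpy as np

def build_plan(machine: np.ndarray,
               candidates: Dict[str, List[np.ndarray]]) -> Dict[str, np.ndarray]:
    """Steps S43-S47 (sketch): for each process, try randomly chosen candidate
    definitions until one fits on the machine, then book its consumption."""
    used = np.zeros_like(machine, dtype=float)
    plan: Dict[str, np.ndarray] = {}
    for process, definitions in candidates.items():
        remaining = list(definitions)
        random.shuffle(remaining)
        for definition in remaining:
            consumption = np.where(definition == 101, 0, definition)
            if np.all(used + consumption <= machine):
                used = used + consumption
                plan[process] = definition
                break
        else:
            raise RuntimeError(f"no compatible definition left for {process}")
    return plan

def schedule(machine, candidates, rules_ok, run_plan_and_measure, max_iterations=10):
    """Steps E4-E6 (sketch): build a plan, play it, verify rules and consumption,
    and iterate (here simply by rebuilding) until the rules are satisfied."""
    for _ in range(max_iterations):
        plan = build_plan(machine, candidates)
        observed = run_plan_and_measure(plan)   # placeholder for probe measurements
        if rules_ok(plan, observed):            # placeholder for the regulatory repository check
            return plan
    raise RuntimeError("no rule-compliant execution plan found")
```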
Claims (13)
1. Device comprising at least one computer machine and software for implementing a mechanism for creating batch processing schedules optimized for a given computer infrastructure, characterized in that it consists of: - a hardware and software arrangement for storing a resource repository (1) and a regulatory repository (2), - a module for building an execution plan (3) optimized with all the elements collected by the resource repository (1) and in compliance with the execution rules determined by the regulatory repository (2).

2. Device according to claim 1, characterized in that the resource repository (1) is constituted by: - a hardware and software arrangement for measuring, by means of consumption probes (4), the empty footprint (11) of all the machines impacted by the batch process(es); - a hardware and software arrangement allowing, on the one hand, the measurement, by means of consumption probes, of the footprint of each batch process (12) on each machine, by subtracting the empty footprint from the measured consumption over the same period, and, on the other hand, the calculation of the total footprint on all the machines impacted by the process(es), then the storage of this total footprint in the resource repository.

3. Device according to claim 1 or 2, characterized in that the regulatory repository is constituted by the use of a human-machine interface (20) for creating the execution rules applicable to each batch process Ti and recording them in memory to constitute the regulatory repository (2).

4. Device according to one of claims 1 to 3, characterized in that the regulatory repository consists of at least one stored matrix table corresponding to each process Ti applied to one or more resources of a machine Mj, in which the number of resources defines one dimension of the matrix table, the number of assigned time slots defines another dimension of the matrix, and the coefficients of the matrix table are the percentages of use of the resource, with the value "101" representing non-activity and the value "0" activity of the resource, and the value "99", representing the non-compatibility of the process with another process, constituting the coefficient of an additional row or column.

5. Device according to one of claims 1 to 4, characterized in that the resource repository is constituted by means of a human-machine interface (20) for creating rules on the percentage of use of each resource of a machine, applicable for each batch process, and storing them in memory to constitute the resource repository (1).

6. Device according to claim 5, characterized in that the resource repository consists of a matrix table stored for each machine Mj, one dimension of which is constituted by the number of different resources of the machine and the other dimension by the number of time slots of the machine, the coefficients defining, according to the resource and the time slot, the occupancy rates of each piece of equipment of the machine, the coefficient "0" denoting the unoccupied state of the resource.

7. Device according to one of claims 1 to 6, characterized in that, once an execution plan has been drawn up, this predetermined execution plan is executed to verify that the set of rules is respected for all the machines involved in the batch processing and that resource consumption is at the expected level for each resource.

8.
Device according to claim 3, characterized in that the HMI (20) involved in the definition of the regulatory repository (2) makes it possible to: - define and store the hourly restrictions of each process; - define and store the maximum acceptable limits of resource consumption for each server making up the application chain; - define and store the inter-process scheduling rules.

9. Device according to one of claims 1 to 8, characterized in that the defined rules can be as varied and different as: - process A must be executed before process B, - process C must start before a specific time, - process D must end before a specific time, - the consumption of resource X of machine 1 cannot exceed Y%, - machine 2 must be fully available for user actions from a specific time.

10. Device according to one of claims 2 to 9, characterized in that it comprises a second human-machine interface (HMI 2) (10) for defining the consumption probes (4) on each machine, which return metrics (5) derived from the use of the machine's resources, as well as the acceptability thresholds associated with each probe.

11. Method for optimizing batch processing schedules on a computer infrastructure comprising a plurality of machines, implemented by software running on at least one machine of the IT infrastructure, characterized in that it comprises: - a step of measuring the empty footprint of all the machines impacted by batch processing (E1); - a step of measuring the footprint of each process on all the machines that the batch process impacts, by subtracting the consumption measured in the first measurement step from the consumption measured over the same period, and of recording a reference of the needs, in terms of resources of the application chain, with no activity and during processing (E2); - a step of storing execution rules and expected consumption levels (E3); - a step of constructing an execution plan with all the elements collected during the preceding steps and in compliance with the recorded execution rules (E4); - a step of executing the predetermined plan and verifying that the set of rules is respected and that resource consumption is at the expected level (E5).

12. Method for optimizing batch processing schedules on a computing infrastructure according to claim 11, characterized in that the construction of the execution plan is carried out: - from the resource repository (1) and the regulatory repository (2), by determining the optimal execution plan aiming to reduce the overall execution time while maximizing the use of available resources, - and by determining the level of resources theoretically used during the execution of the entire plan.

13. Method for optimizing batch processing schedules on an IT infrastructure according to claim 11 or 12, characterized in that the verification (E6) is carried out by comparing the level of resources actually used with that calculated previously, to take account of the impact of executing the processes in parallel, with, if necessary, a modification of the regulatory repository (2) if certain rules are not respected during execution.
Patent family:
Publication number | Publication date
FR3038405B1 | 2019-04-12
EP3113022A1 | 2017-01-04
EP3113022B1 | 2021-11-03
Cited references:
Publication number | Filing date | Publication date | Applicant | Title
US20050132167A1 | 2003-12-10 | 2005-06-16 | Giuseppe Longobardi | Workload scheduler with cumulative weighting indexes
US20140075442A1 | 2006-03-31 | 2014-03-13 | eBay Inc. | Batch scheduling
US20090089772A1 | 2007-09-28 | 2009-04-02 | International Business Machines Corporation | Arrangement for scheduling jobs with rules and events
US20090158286A1 | 2007-12-18 | 2009-06-18 | International Business Machines Corporation | Facility for scheduling the execution of jobs based on logic predicates
US20060020923A1 | 2004-06-15 | 2006-01-26 | K5 Systems Inc. | System and method for monitoring performance of arbitrary groupings of network infrastructure and applications
FR3061784B1 | 2017-01-12 | 2021-11-26 | Bull SAS | Process for evaluating the performance of an application chain within an IT infrastructure
FR3071335A1 | 2017-09-20 | 2019-03-22 | Bull SAS | Method and system for optimizing the ordering of batch processes
FR3076370B1 | 2017-12-30 | 2020-11-27 | Bull SAS | Method and system for the optimization of the scheduling of batch treatments
FR3087282B1 | 2018-10-11 | 2020-10-09 | Bull SAS | Process for optimizing the minimum sizing of an infrastructure intended to execute a schedule plan
EP3640800A1 | 2018-10-17 | 2020-04-22 | Bull SAS | Method for improving the efficiency of use of the resources of an infrastructure designed to execute a scheduling plan
FR3087556A1 | 2018-10-17 | 2020-04-24 | Bull SAS | Method for improving the efficiency of use of the resources of an infrastructure for implementing a scheduling plan
FR3091376B1 | 2018-12-31 | 2020-12-11 | Bull SAS | Process for optimizing a scheduling plan and the sizing of an IT infrastructure
Legal status:
2016-06-22 | PLFP | Fee payment | Year of fee payment: 2
2017-01-06 | PLSC | Publication of the preliminary search report | Effective date: 20170106
2017-06-21 | PLFP | Fee payment | Year of fee payment: 3
2018-06-21 | PLFP | Fee payment | Year of fee payment: 4
2019-07-25 | PLFP | Fee payment | Year of fee payment: 5
2020-07-28 | PLFP | Fee payment | Year of fee payment: 6
2021-07-26 | PLFP | Fee payment | Year of fee payment: 7